
    Matsuoka's CPG With Desired Rhythmic Signals for Adaptive Walking of Humanoid Robots

    The desired rhythmic signals for adaptive walking of humanoid robots should have proper frequencies, phases, and shapes. Matsuoka's central pattern generator (CPG) can generate rhythmic signals with reasonable frequencies and phases, and has therefore been widely applied to control the movements of legged robots, such as the walking of humanoid robots. However, it is difficult for this kind of CPG to generate rhythmic signals with desired shapes, which limits the adaptability of humanoid walking in various environments. To address this issue, a new framework that generates desired rhythmic signals for Matsuoka's CPG is presented. The proposed framework has three main parts. First, feature processing transforms the Matsuoka's CPG outputs into a normalized limit cycle. Second, taking the normalized limit cycle combined with robot feedback as feature inputs, and given a suitable learning objective, a neural network (NN) learns to generate the desired rhythmic signals. Finally, to ensure the continuity of the desired rhythmic signals, signal filtering is applied to the NN outputs to smooth the discontinuous parts. Numerical experiments suggest that the framework not only generates a variety of rhythmic signals with desired shapes but also preserves the frequency and phase properties of Matsuoka's CPG. In addition, the framework is embedded into a control system for adaptive omnidirectional walking of the humanoid robot NAO. Extensive simulation and real-world experiments demonstrate that the framework generates desired rhythmic signals for adaptive walking of NAO on fixed and changing inclined surfaces. Furthermore, comparison studies verify that it significantly improves the adaptability of NAO's walking relative to other methods.
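The two-neuron Matsuoka oscillator at the heart of such a framework can be sketched in a few lines. This is a generic textbook formulation with illustrative parameters, not the paper's tuned controller: x are membrane potentials, v adaptation states, and the rectified difference y1 − y2 is the rhythmic output whose frequency and phase the framework preserves.

```python
def matsuoka_cpg(steps=20000, dt=0.001, tau=0.05, tau_a=0.6,
                 beta=2.5, w=2.0, u=1.0):
    """Euler simulation of a two-neuron Matsuoka oscillator.

    tau/tau_a: rise/adaptation time constants, beta: adaptation gain,
    w: mutual inhibition weight, u: tonic input.  Oscillation requires
    roughly 1 + tau/tau_a < w < 1 + beta.
    """
    x = [0.1, 0.0]          # small asymmetry breaks the symmetric fixed point
    v = [0.0, 0.0]
    out = []
    for _ in range(steps):
        y = [max(0.0, xi) for xi in x]          # rectified firing rates
        dx = [(-x[0] - beta * v[0] - w * y[1] + u) / tau,
              (-x[1] - beta * v[1] - w * y[0] + u) / tau]
        dv = [(-v[0] + y[0]) / tau_a,
              (-v[1] + y[1]) / tau_a]
        x = [xi + dt * d for xi, d in zip(x, dx)]
        v = [vi + dt * d for vi, d in zip(v, dv)]
        out.append(y[0] - y[1])                 # rhythmic output signal
    return out

trace = matsuoka_cpg()  # the tail of the trace alternates in sign: a sustained rhythm
```

The paper's feature processing would then normalize this limit cycle before feeding it, with robot feedback, to the NN.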

    Towards Ontology Reshaping for KG Generation with User-in-the-Loop: Applied to Bosch Welding

    Knowledge graphs (KG) are used in a wide range of applications. Automating KG generation is highly desirable given the data volume and variety in industry. One important approach to KG generation is to map the raw data to a given KG schema, namely a domain ontology, and construct the entities and properties according to that ontology. However, automatically generating such an ontology is demanding, and existing solutions are often not satisfactory. An important challenge is the trade-off between two principles of ontology engineering: knowledge-orientation and data-orientation. The former prescribes that an ontology should model the general knowledge of a domain, while the latter emphasises reflecting the specificities of the data to ensure good usability. We address this challenge with our method of ontology reshaping, which automates the conversion of a given domain ontology into a smaller ontology that serves as the KG schema. The domain ontology can thus be designed to be knowledge-oriented, while the KG schema covers the data specificities. In addition, our approach allows user preferences to be included in the loop. We demonstrate our ongoing research on ontology reshaping and present an evaluation on real industrial data, with promising results.
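At its simplest, reshaping projects the domain ontology onto what the raw data actually populates, with user preferences overriding the projection. A minimal sketch with entirely hypothetical class and column names (not Bosch's actual schema or the paper's algorithm):

```python
def reshape(ontology, columns, preferences):
    """Project a domain ontology {class: properties} onto the
    properties the raw data actually populates, keeping any extras
    the user asked for (user-in-the-loop)."""
    schema = {}
    for cls, props in ontology.items():
        kept = (props & columns) | preferences.get(cls, set())
        if kept:                      # drop classes the data never touches
            schema[cls] = kept
    return schema

# Hypothetical mini-ontology and data columns for illustration only.
domain_ontology = {
    "WeldingOperation": {"current", "voltage", "duration", "operator"},
    "Machine": {"serial", "vendor", "maintenanceHistory"},
    "Material": {"grade", "thickness"},
}
data_columns = {"current", "voltage", "duration", "serial", "grade"}
user_keep = {"Material": {"thickness"}}   # user preference in the loop
schema = reshape(domain_ontology, data_columns, user_keep)
```

The resulting schema stays knowledge-oriented in its class structure while only carrying the properties the data (or the user) requires.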

    Map Merging with Suppositional Box for Multi-Robot Indoor Mapping

    For mapping unknown indoor environments, multi-robot collaborative mapping is more efficient than single-robot mapping. Map merging is one of the fundamental problems in multi-robot collaborative mapping. However, in grid map merging, basic image-processing methods such as feature matching suffer from low feature-matching rates. Driven by this challenge, a novel map merging method is proposed, based on a suppositional box constructed from right-angled points and vertical lines. First, the right-angled points of the suppositional box are extracted from vertical points, i.e., intersections of vertical lines. Second, based on the common-edge relationships between right-angled points, suppositional boxes are constructed in the map. Then the transformation matrix is obtained from matched pairs of suppositional boxes. Finally, to handle matching errors arising from the lengths of the pairs, a Kalman filter is used to optimize the transformation matrix. Experimental results show that this method can effectively merge maps in different scenes, with a higher successful matching rate than other features.
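Once corners of suppositional boxes are matched across the two maps, the rigid transform between the grids has a closed form. A minimal sketch of that step only (a 2D least-squares rotation + translation fit; the box construction and Kalman refinement are not reproduced here):

```python
import math

def transform_from_pairs(src, dst):
    """Least-squares 2D rotation + translation aligning matched point
    pairs (e.g. corresponding box corners from the two maps)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (x1, y1), (x2, y2) in zip(src, dst):
        ax, ay = x1 - csx, y1 - csy          # centred source point
        bx, by = x2 - cdx, y2 - cdy          # centred destination point
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)           # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)           # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

Feeding three corners and their images under a known 90° rotation plus translation recovers exactly that transform.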

    Study the performance of battery models for hybrid electric vehicles

    © 2014 IEEE. This paper studies the performance of battery models for hybrid electric vehicles (HEV). Two battery models are evaluated using Autonomie, a plug-and-play powertrain and vehicle development software package. The base vehicle model used to test the battery models is the Prius MY04, a power-split hybrid electric vehicle model in Autonomie. The battery model in the Prius MY04 is based on the Thevenin battery model and does not consider the effects of the double layer, diffusion, or the coulombic efficiency. To improve the battery model, this study adds battery current loss, along with the voltage losses due to the double-layer and diffusion effects. Simulation tests compare the vehicle fuel economy obtained with the two battery models against the vehicle fuel economy test data provided by the Department of Energy (DOE). The results show that the improved battery model yields smaller fuel economy errors than the Thevenin battery model relative to the DOE-published vehicle test data.
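Both models can be viewed as equivalent-circuit cells with one or more RC branches. A generic sketch: with a single RC pair this is the classic Thevenin model, and a second, slower RC pair is one common way to represent the diffusion voltage loss discussed above. Parameter values are illustrative, not the paper's fitted ones.

```python
def terminal_voltage(i_load, dt, ocv=3.7, r0=0.01,
                     rc_pairs=((0.005, 1000.0), (0.02, 20000.0))):
    """Step an equivalent-circuit cell under a load-current profile.

    ocv: open-circuit voltage [V], r0: ohmic resistance [ohm],
    rc_pairs: (R [ohm], C [F]) branches; one pair gives the Thevenin
    model, the second, slower pair stands in for the diffusion loss.
    """
    v_rc = [0.0] * len(rc_pairs)
    trace = []
    for i in i_load:
        for k, (r, c) in enumerate(rc_pairs):
            v_rc[k] += (-v_rc[k] / (r * c) + i / c) * dt   # dV/dt = -V/RC + I/C
        trace.append(ocv - i * r0 - sum(v_rc))             # terminal voltage
    return trace

trace = terminal_voltage([10.0] * 100, dt=0.1)   # 10 A discharge for 10 s
```

Under the constant discharge the terminal voltage sags below the OCV by the ohmic drop plus the growing RC-branch voltages, mimicking the double-layer and diffusion losses.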

    TIF-Reg: Point Cloud Registration with Transform-Invariant Features in SE(3)

    Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction, and medicine. Although numerous advances have been made in point cloud registration in recent years, large-scale rigid transformation remains a problem that most algorithms cannot handle effectively. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm comprises four modules: a transform-invariant feature extraction module, a deep feature embedding module, a corresponding point generation module, and a decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design a TIF in SE(3) (the 3D rigid transformation space) that combines a triangular feature and a local density feature for each point. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformations. The deep feature embedding module embeds the TIF into a high-dimensional space using a deep neural network, further improving the expressive power of the features. The corresponding point cloud is generated with an attention mechanism in the corresponding point generation module, and the final registration transformation is computed in the decoupled SVD module. In experiments, we first train and evaluate TIF-Reg on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5∘ and the RMSE of translation close to 0 m, even when the rotation is up to [−180∘, 180∘] or the translation is up to [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The errors are close to those on ModelNet40, which verifies the good generalization ability of our method. All experiments show that the proposed method is superior to state-of-the-art PCReg algorithms in terms of accuracy and complexity.
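The closed-form rigid-alignment step that an SVD module of this kind builds on is the standard Kabsch/Umeyama solution. A minimal NumPy sketch (the learned TIF features, attention-based correspondences, and the decoupling itself are the paper's contributions and are not reproduced here):

```python
import numpy as np

def kabsch_se3(src, dst):
    """Closed-form rigid transform (R, t) minimising ||src @ R.T + t - dst||
    over rotations, via SVD of the cross-covariance (Kabsch/Umeyama)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    h = src_c.T @ dst_c                      # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```

Given exact correspondences this recovers the ground-truth rotation and translation to machine precision, which is why correspondence quality (the other three modules) dominates registration accuracy.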

    Hierarchical Visual Place Recognition Based on Semantic-Aggregation

    A major challenge in place recognition is robustness against viewpoint changes and appearance changes caused by self and environmental variations. Humans achieve this by recognizing objects and their relationships in a scene under different conditions. Inspired by this, we propose a hierarchical visual place recognition pipeline based on semantic aggregation and scene understanding. The pipeline consists of coarse matching and fine matching. Semantic aggregation takes two forms: residual aggregation of visual and semantic information in coarse matching, and semantic association of semantic edges in fine matching. Through these two processes, we realize a robust coarse-to-fine visual place recognition pipeline across viewpoint and condition variations. Experimental results on benchmark datasets show that our method outperforms several state-of-the-art methods, improving robustness against severe viewpoint and appearance changes while maintaining good matching-time performance. Moreover, we show that it is possible for a computer to realize place recognition based on scene understanding.
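The coarse-to-fine structure itself is generic: rank the database by a cheap global-descriptor similarity, then re-score only a shortlist with a more expensive fine check. A minimal sketch with toy descriptors and a placeholder fine scorer (the paper's semantic-edge association is not reproduced here):

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def coarse_to_fine(query, db, top_k=2, fine_score=None):
    """Rank db entries by descriptor similarity (coarse), then let an
    optional, more expensive fine scorer re-rank the top-k shortlist."""
    ranked = sorted(db, key=lambda e: -cosine(query, e["desc"]))
    shortlist = ranked[:top_k]
    if fine_score is None:
        return shortlist[0]["id"]
    return max(shortlist, key=fine_score)["id"]

# Toy database of places with hypothetical 3-D global descriptors.
db = [{"id": "hall",  "desc": [1.0, 0.0, 0.2]},
      {"id": "lab",   "desc": [0.1, 1.0, 0.0]},
      {"id": "lobby", "desc": [0.9, 0.1, 0.1]}]
query = [1.0, 0.0, 0.15]
```

The fine stage only ever touches the shortlist, which is what keeps matching-time performance good as the database grows.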